13 research outputs found

    A Max-Product EM Algorithm for Reconstructing Markov-tree Sparse Signals from Compressive Samples

    We propose a Bayesian expectation-maximization (EM) algorithm for reconstructing Markov-tree sparse signals via belief propagation. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown approximately sparse signal and zero-mean white Gaussian noise with unknown variance. The signal is composed of large- and small-magnitude components identified by binary state variables whose probabilistic dependence structure is described by a Markov tree. Gaussian priors are assigned to the signal coefficients given their state variables, and Jeffreys' noninformative prior is assigned to the noise variance. Our signal reconstruction scheme is based on an EM iteration that aims at maximizing the posterior distribution of the signal and its state variables given the noise variance. We construct the missing data for the EM iteration so that the complete-data posterior distribution corresponds to a hidden Markov tree (HMT) probabilistic graphical model that contains no loops, and implement its maximization (M) step via a max-product algorithm. This EM algorithm estimates the vector of state variables and iteratively solves a linear system of equations to obtain the corresponding signal estimate. We select the noise variance so that the corresponding estimated signal and state variables obtained upon convergence of the EM iteration have the largest marginal posterior distribution. We compare the proposed and existing state-of-the-art reconstruction methods via signal and image reconstruction experiments. Comment: To appear in IEEE Transactions on Signal Processing.
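    The alternation the abstract describes — a max-product state update followed by a linear-system signal update — can be sketched in a drastically simplified form. The sketch below assumes chain-structured binary states (a chain is the simplest Markov tree, so max-product reduces to the Viterbi algorithm), a known noise variance, and plain coordinate ascent between the two updates; it is not the paper's exact EM construction, and all names (`viterbi`, `reconstruct`, `sig2`) are illustrative.

    ```python
    import numpy as np

    def viterbi(node_ll, log_T, log_p0):
        """Max-product on a chain (the simplest Markov tree) via Viterbi."""
        n, K = node_ll.shape
        score = log_p0 + node_ll[0]
        back = np.zeros((n, K), dtype=int)
        for i in range(1, K and n):
            cand = score[:, None] + log_T              # cand[prev_state, cur_state]
            back[i] = np.argmax(cand, axis=0)
            score = cand[back[i], np.arange(K)] + node_ll[i]
        q = np.empty(n, dtype=int)
        q[-1] = np.argmax(score)
        for i in range(n - 1, 0, -1):                  # backtrack the best path
            q[i - 1] = back[i, q[i]]
        return q

    def reconstruct(Phi, y, noise_var, sig2=(1e-4, 1.0), n_iter=20):
        """Alternate a max-product state update with a linear-system signal
        update (a simplified stand-in for the paper's EM iteration)."""
        n = Phi.shape[1]
        log_T = np.log([[0.95, 0.05], [0.05, 0.95]])   # persistent binary states
        log_p0 = np.log([0.5, 0.5])
        q = np.ones(n, dtype=int)                       # start: all coefficients "large"
        for _ in range(n_iter):
            d = np.asarray(sig2)[q]                     # prior variances given states
            # MAP signal estimate given the states: solve a linear system
            H = Phi.T @ Phi / noise_var + np.diag(1.0 / d)
            x = np.linalg.solve(H, Phi.T @ y / noise_var)
            # state update: per-coefficient Gaussian log-likelihoods, then max-product
            node_ll = np.stack(
                [-0.5 * (x**2 / s + np.log(2.0 * np.pi * s)) for s in sig2], axis=1)
            q = viterbi(node_ll, log_T, log_p0)
        return x, q

    rng = np.random.default_rng(1)
    n, m = 80, 50                                       # underdetermined: m < n
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[20:30] = 2.0 * rng.standard_normal(10)       # one "large" block
    y = Phi @ x_true + 0.05 * rng.standard_normal(m)
    x_hat, q_hat = reconstruct(Phi, y, noise_var=0.05**2)
    ```

    The persistent transition matrix plays the role of the Markov-tree dependence: once a coefficient is in the "large" state, its chain neighbor is likely to be as well, which favors block-sparse estimates.
    
    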

    Sparse Signal Reconstruction from Quantized Noisy Measurements via GEM Hard Thresholding


    Bayesian Complex Amplitude Estimation and Adaptive Matched Filter Detection in Low-Rank Interference


    Projected Nesterov’s Proximal-Gradient Algorithm for Sparse Signal Recovery

    We develop a projected Nesterov's proximal-gradient (PNPG) approach for sparse signal reconstruction that combines adaptive step size with Nesterov's momentum acceleration. The objective function that we wish to minimize is the sum of a convex differentiable data-fidelity (negative log-likelihood (NLL)) term and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ1-norm penalty on the signal's linear transform coefficients. The PNPG approach employs a projected Nesterov's acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of O(k⁻²) objective-function convergence rate and convergence of the iterates, which account for adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of PNPG is largely application independent. Tomographic and compressed-sensing reconstruction experiments with Poisson generalized linear and Gaussian linear measurement models demonstrate the performance of the proposed approach. This is a manuscript of an article published as Gu, Renliang, and Aleksandar Dogandžić. "Projected Nesterov's Proximal-Gradient Algorithm for Sparse Signal Recovery." IEEE Transactions on Signal Processing 65, no. 13 (2017): 3510–3525. doi:10.1109/TSP.2017.2691661. Posted with permission.
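    The ingredients listed in the abstract — momentum acceleration, restart, a projected proximal mapping, and step-size adaptation — can be sketched under simplifying assumptions: a Gaussian NLL f(x) = ½‖Ax − y‖², an identity sparsifying transform (so the proximal mapping is an exact soft-threshold rather than an inner dual iteration), the nonnegative orthant as the convex-set constraint, and a crude expand-and-backtrack rule in place of the paper's adaptive step-size scheme. The names (`pnpg`, `lam`) are illustrative, not the authors' code.

    ```python
    import numpy as np

    def pnpg(A, y, lam, n_iter=500):
        """Sketch: projected Nesterov proximal-gradient iteration with restart.
        Assumes f(x) = 0.5 * ||Ax - y||^2, an l1 penalty on x itself, and a
        nonnegativity constraint (prox = soft-threshold, then projection)."""
        n = A.shape[1]
        x = np.zeros(n)
        x_old = x.copy()
        t, step = 1.0, 1.0

        f = lambda v: 0.5 * np.sum((A @ v - y) ** 2)       # Gaussian NLL
        grad = lambda v: A.T @ (A @ v - y)

        def prox(z, s):
            # soft-threshold (l1), then project onto the nonnegative orthant
            return np.maximum(np.maximum(np.abs(z) - s * lam, 0.0) * np.sign(z), 0.0)

        for _ in range(n_iter):
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x + (t - 1.0) / t_new * (x - x_old)        # Nesterov momentum step
            g, fz = grad(z), f(z)
            step *= 1.2                                    # crude step adaptation
            while True:                                    # backtracking line search
                x_new = prox(z - step * g, step)
                d = x_new - z
                if f(x_new) <= fz + g @ d + d @ d / (2.0 * step):
                    break
                step *= 0.5
            if (z - x_new) @ (x_new - x) > 0:              # gradient-based restart
                t_new = 1.0
            x_old, x, t = x, x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))                      # underdetermined system
    x_true = np.zeros(100)
    x_true[[5, 17, 60]] = [2.0, 1.5, 3.0]                   # sparse nonnegative signal
    y = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = pnpg(A, y, lam=0.1)
    ```

    The backtracking test enforces the local majorization condition the abstract refers to, while the restart check drops the momentum whenever it points away from the descent direction.
    
    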

    Bayesian NDE Defect Signal Analysis


    Beam Hardening Correction and Linearization Function Fitting

    The evolution of the beam hardening correction algorithm (NPG-BFGS), which requires no more information than the conventional filtered back projection (FBP) method.